
    A Comparative Study Of Fuzzy C-Means And K-Means Clustering Techniques

    Clustering analysis has long been considered a useful means of identifying patterns in datasets. The aim of this paper is to present a comparative study of two well-known clustering algorithms, fuzzy c-means (FCM) and k-means. First, we give an overview of both methods, with emphasis on the implementation of each algorithm. Then, we apply both algorithms to six datasets and measure the quality of the clustering results based on the similarity measure used in each algorithm and its representation of the clustering result. Next, we optimize the fuzzification parameter m in the FCM algorithm to improve clustering performance. Finally, we compare the experimental results of both methods.
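    The key difference between the two algorithms is that FCM assigns each point a graded membership in every cluster, controlled by the fuzzification parameter m. A minimal one-dimensional sketch of the two alternating FCM update steps (not the paper's implementation; the toy data, initial centres, and m = 2.0 are illustrative):

```python
# Minimal 1-D fuzzy c-means sketch. The membership update is
#   u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
# and the centre update is a u^m-weighted mean of the points.

def fcm_memberships(points, centres, m=2.0):
    """Membership u[i][j] of point j in cluster i."""
    u = []
    for c in centres:
        row = []
        for x in points:
            d = abs(x - c) or 1e-12  # guard exact hits on a centre
            inv = sum((d / (abs(x - ck) or 1e-12)) ** (2.0 / (m - 1.0))
                      for ck in centres)
            row.append(1.0 / inv)
        u.append(row)
    return u

def fcm_centres(points, u, m=2.0):
    """Centre update: v_i = sum_j u_ij^m x_j / sum_j u_ij^m."""
    return [sum((uij ** m) * x for uij, x in zip(row, points)) /
            sum(uij ** m for uij in row) for row in u]

points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.7]   # two obvious 1-D clusters
centres = [0.0, 10.0]
for _ in range(20):                        # alternate the two updates
    u = fcm_memberships(points, centres)
    centres = fcm_centres(points, u)
print(sorted(centres))                     # centres settle near 1.0 and 8.0
```

    K-means is the hard-assignment special case: each u_ij collapses to 0 or 1 for the nearest centre, which is why tuning m (how "soft" the memberships are) only matters for FCM.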

    A Comparison of Type-1 and Type-2 Fuzzy Logic Controllers in Robotics: A review

    Most real-world applications face high levels of uncertainty that can affect their operation. Hence, there is a need for approaches that can handle these uncertainties and reduce their effects on a given application. To date, type-1 Fuzzy Logic Controllers (FLCs) have been applied with great success to many real-world applications. However, the traditional type-1 FLC, which uses crisp type-1 fuzzy sets, cannot handle high levels of uncertainty appropriately. It has been shown that a type-2 FLC using type-2 fuzzy sets can handle such uncertainties better and thus produce better performance. As such, type-2 FLCs are considered to have the potential to overcome the limitations of type-1 FLCs and to produce a new generation of fuzzy controllers with improved performance for applications that require handling high levels of uncertainty. This paper briefly introduces the interval type-2 FLC and its benefits, and presents some of its real-world applications.
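    The structural difference the abstract refers to can be shown with a toy example: an interval type-2 fuzzy set replaces a single membership grade with a [lower, upper] interval, the so-called footprint of uncertainty (FOU). A minimal sketch, assuming an illustrative triangular primary membership and a spread of 0.2 (both are arbitrary choices for the example, not from the paper):

```python
# Type-1 triangular membership vs. an interval type-2 version of it,
# built by widening/narrowing the triangle's feet by a fixed spread.

def tri(x, a, b, c):
    """Type-1 triangular membership over [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x, a, b, c, spread=0.2):
    """Interval type-2 grade: (lower, upper) bounds forming the
    footprint of uncertainty around the type-1 triangle."""
    upper = tri(x, a - spread, b, c + spread)
    lower = tri(x, a + spread, b, c - spread)
    return lower, upper

lo, up = it2_membership(0.5, a=0.0, b=1.0, c=2.0)
print(lo, up)  # the type-1 grade tri(0.5,0,1,2)=0.5 lies inside [lo, up]
```

    A type-2 FLC propagates these intervals through its rules, so input uncertainty (e.g. sensor noise shifting the effective membership function) is carried explicitly instead of being collapsed to a single crisp grade.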

    Logical Approach: Consistency Rules between Activity Diagram and Class Diagram

    Requirements engineering (RE) is fundamental to the software development process, encompassing activities ranging from requirements elicitation and analysis to specification, verification and validation. Poor requirements have been shown to be a major cause of software problems such as cost overruns, delivery delays, failure to meet expectations and degradation. In recent times, researchers have expressed a great deal of interest in requirements validation, specifically model validation, and the field of research related to consistency checking has grown considerably. Numerous methods, approaches and techniques have been proposed to address requirements inconsistency, particularly in model validation. In the software development industry, UML modelling is extensively used. The different UML models characterise the system from various perspectives and are therefore related to one another, which is why inconsistency becomes unavoidable: it arises when elements of the various models representing different parts of the system overlap without coordination. In this paper, the emphasis is on the consistency rules that exist between two such models, the class diagram and the activity diagram, and on the conversion of those rules into logical predicates, which are assessed with a sample case study comprising the two models.
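    As a concrete illustration of what "converting a rule into a logical predicate" can look like, here is a minimal sketch of one plausible rule of this kind: every action in the activity diagram must correspond to an operation of some class in the class diagram. The diagram encodings and all names are illustrative, not the paper's notation:

```python
# One consistency rule as a predicate:
#   forall a in actions(AD) . exists c in classes(CD) . a in operations(c)
# Returning the set of violating actions makes the check diagnostic.

class_diagram = {
    "Order":   {"operations": {"create", "cancel"}},
    "Payment": {"operations": {"authorize", "refund"}},
}
activity_diagram = {"actions": {"create", "authorize", "ship"}}

def inconsistent_actions(cd, ad):
    """Actions in the activity diagram with no matching class operation."""
    ops = set().union(*(c["operations"] for c in cd.values()))
    return {a for a in ad["actions"] if a not in ops}

violations = inconsistent_actions(class_diagram, activity_diagram)
print(violations)  # {'ship'} - 'ship' has no corresponding operation
```

    An empty result means the two models satisfy this particular rule; a real checker would evaluate a whole family of such predicates over the parsed UML models.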

    Classification of Landsat 8 Satellite Data Using NDVI Thresholds

    This study aims to classify Landsat 8 satellite data using NDVI thresholds. Initially, the visible and near-infrared bands of Landsat 8 were used to derive a Normalized Difference Vegetation Index (NDVI) image. Vegetation, non-vegetation and water areas were then analyzed, with the thresholds separating them carefully determined with the aid of ground-truth information for the study area. Density slicing was performed to separate the image into different land covers. Finally, color mapping and class labeling were done to complete the classification process. The accuracy of the classified image was then assessed using a confusion matrix, from which the overall classification accuracy and Kappa coefficient were computed. The results show that NDVI-based classification is able to classify Landsat 8 satellite data with high accuracy.
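    The NDVI formula and the density-slicing step can be sketched in a few lines. The thresholds below (water < 0.0, non-vegetation in [0.0, 0.3), vegetation >= 0.3) are common illustrative values, not the ones derived from the paper's ground truth:

```python
# NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]; density slicing then
# maps NDVI ranges to land-cover classes.

def ndvi(nir, red):
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def classify(value, water_t=0.0, veg_t=0.3):
    if value < water_t:
        return "water"
    return "vegetation" if value >= veg_t else "non-vegetation"

# toy pixels as (NIR, Red) reflectance pairs
pixels = [(0.05, 0.10), (0.25, 0.20), (0.60, 0.10)]
labels = [classify(ndvi(n, r)) for n, r in pixels]
print(labels)  # ['water', 'non-vegetation', 'vegetation']
```

    For Landsat 8 specifically, NIR is band 5 and Red is band 4; in practice the two thresholds are tuned against ground-truth samples exactly as the abstract describes.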

    Road crack detection using adaptive multi resolution thresholding techniques

    Machine vision is very important for ensuring the success of intelligent transportation systems, particularly in the area of road maintenance. For this reason, many studies have focused on automatic image-based crack detection as a replacement for manual inspection, which depends on a specialist's knowledge and expertise. In image processing, the pre-processing and edge detection stages are important for filtering out noise and enhancing the quality of the edges in the image. Since thresholding is one of the most powerful methods used in edge detection, we propose a modified Otsu-Canny edge detection algorithm for selecting the two threshold values, and implement a multi-resolution fixed-partitioning method to analyse the global and local threshold values of the image. This is followed by a statistical measure for selecting the edge image with the best global threshold. This study utilised the road crack image dataset obtained from CrackForest. The results reveal that the proposed method not only performs better than the conventional Canny edge detector, but also that the maximum value derived from the local thresholds of the 5x5 partitioned image outperforms the other partition scales.
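    One building block of such a pipeline is Otsu's method itself: choosing the grey level that maximises the between-class variance of the two resulting pixel populations. A pure-Python sketch over a toy 8-level histogram (the histogram values are illustrative; the paper's modification of how the two Canny thresholds are then derived is not reproduced here):

```python
# Otsu's global threshold: split the histogram at t into [0, t] and
# (t, L-1] and pick the t maximising w0 * w1 * (m0 - m1)^2.

def otsu_threshold(hist):
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]                 # pixels at or below t
        cum += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = cum / w0, (total_sum - cum) / w1
        var = w0 * w1 * (m0 - m1) ** 2  # between-class variance (scaled)
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# bimodal toy histogram: dark crack pixels near level 1, road near level 6
hist = [5, 40, 10, 2, 3, 15, 50, 8]
print(otsu_threshold(hist))  # 3 - the valley between the two modes
```

    In a multi-resolution scheme along the lines the abstract describes, this same computation would be run once on the whole image (global threshold) and once per partition cell (local thresholds), with a statistic over the local results used to pick the best edge image.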

    Tiny datablock in saving Hadoop distributed file system wasted memory

    The Hadoop distributed file system (HDFS) is the file system Hadoop uses to store all incoming data. Since its introduction, HDFS has consumed a huge amount of memory to serve even a normal dataset. The current file-saving mechanism in HDFS stores only one file per datablock, so a file of just 5 MB will take up a whole datablock, leaving the rest of that block's capacity unavailable to other incoming files; this is a considerable waste of memory when serving a normal-sized dataset. This paper proposes a method called tiny datablock-HDFS (TD-HDFS) to increase the usability of HDFS memory and its file-hosting capacity by reducing the datablock size to the minimum capacity and then merging all related datablocks into one master datablock. This master datablock consists of tiny virtual datablocks that hold related small files together, exploiting the full memory of the master datablock. The result of this study is a running HDFS with a minimal amount of wasted memory and the same read/write performance. The results were examined through a comparison between standard HDFS file hosting and the proposed solution.
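    A back-of-the-envelope sketch of the waste being targeted: with one file per fixed-size block, small files strand most of a block, while packing related small files into a shared "master" block reclaims it. The 128 MB block size is the common HDFS default; the file sizes and the greedy first-fit packing are illustrative choices, not TD-HDFS itself:

```python
# Compare stranded capacity: one file per block vs. packing small
# files together into shared blocks (greedy first-fit, largest first).

BLOCK_MB = 128

def waste_one_file_per_block(files_mb):
    """Each file occupies a whole block; waste = unused space per block."""
    return sum(BLOCK_MB - min(f, BLOCK_MB) for f in files_mb)

def waste_packed(files_mb):
    """First-fit packing of files into shared blocks; waste = leftovers."""
    blocks = []  # remaining free capacity per shared block
    for f in sorted(files_mb, reverse=True):
        for i, free in enumerate(blocks):
            if f <= free:
                blocks[i] -= f
                break
        else:
            blocks.append(BLOCK_MB - f)
    return sum(blocks)

files = [5, 5, 5, 20, 30, 60]  # six small files, 125 MB in total
print(waste_one_file_per_block(files), waste_packed(files))  # 643 vs 3
```

    Six blocks (768 MB) for 125 MB of data strands 643 MB, whereas the packed layout fits everything into one block with only 3 MB spare, which is the kind of gap the virtual-datablock merging in TD-HDFS is designed to close.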

    Robotics Competition-Based Learning for 21st Century STEM Education

    This article proposes Robotics Competition-based Learning (R-CBL) in the 21st-century learning environment to integrate Science, Technology, Engineering, and Mathematics (STEM) education. The study attempts to highlight the connections between the STEM disciplines as well as the integration of 21st-century competencies to spark students' interest in STEM. The problem addressed in this article is that Malaysia may soon experience a serious human capital deficiency in the STEM field; it is therefore important to adopt a new pedagogy for 21st-century integrated STEM education. The objective of this study is to investigate the effectiveness of R-CBL in increasing students' interest in STEM at "Robot Olympics Malaysia 2018". The findings show that educational robotics, as a digital tool, can promote STEM learning among secondary school students. In conclusion, by practising Robotics Competition-based Learning in the 21st-century STEM classroom, learners are expected to become engaged with the integrated STEM curriculum. The implication of this pedagogy is to bridge the gap between the expected curriculum, the written curriculum, and the taught curriculum. Keywords: robotics, competition, STEM education, 21st century

    Extrema Points Application In Determining Iris Region Of Interest

    Extrema points are often applied to solve everyday problems, for example in determining the potential of a tool and in optimisation. In this study, extrema points were used to help determine the region of interest (ROI) for the iris in iris recognition systems. Iris recognition is an automated method of biometric identification that uses mathematical pattern-recognition techniques on images of one or both of an individual's irises, whose complex patterns are unique, stable, and can be seen from a distance. To obtain accurate results, the iris must be localised correctly. To address this issue, this paper proposes a method of iris localisation for both ideal and non-ideal iris images. The algorithm is based on classifying candidate regions of interest with a Support Vector Machine (SVM), using a histogram of grey-level values as the descriptor for each region produced by a region-growing technique. The valid ROI is found from the SVM probability graph by identifying the global-minimum condition determined by a second-derivative model of the function's graph. The model of the global-minimum condition values is then used in the test phase, and the results show that the ROI image obtained helps eliminate sensitive noise with fewer computations while preserving relevant information.
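    The global-minimum condition on a sampled curve can be sketched with finite differences: a minimum sits where the first derivative changes sign from negative to non-negative while the second derivative is positive, and the global minimum is the deepest such point. The sampled "probability" curve below is synthetic, not the paper's SVM output:

```python
# Locate the global minimum of a sampled curve via discrete first and
# second derivatives (forward differences).

def global_minimum_index(values):
    """Index of the deepest valley: f' crosses - to +, with f'' > 0."""
    d1 = [values[i + 1] - values[i] for i in range(len(values) - 1)]
    d2 = [d1[i + 1] - d1[i] for i in range(len(d1) - 1)]
    candidates = [i + 1 for i in range(len(d2))
                  if d1[i] < 0 <= d1[i + 1] and d2[i] > 0]
    return min(candidates, key=lambda i: values[i]) if candidates else None

# synthetic curve with a shallow dip (0.6) and a deeper global dip (0.1)
curve = [0.9, 0.7, 0.6, 0.65, 0.5, 0.2, 0.1, 0.3, 0.6]
print(global_minimum_index(curve))  # 6 - the index of the 0.1 valley
```

    In the paper's setting, the analogous computation would run over the SVM probability values of the candidate regions, with the global minimum picking out the valid ROI.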